Results 1 - 3 of 3
1.
Front Public Health ; 10: 982289, 2022.
Article in English | MEDLINE | ID: covidwho-2215416

ABSTRACT

The outbreak of coronavirus disease 2019 (COVID-19) has caused massive infections and large death tolls worldwide. Although many studies have examined the clinical characteristics and treatment plans of COVID-19, few conduct in-depth prognostic research that leverages consecutive rounds of multimodal clinical examination and laboratory test data to support clinical decision-making in COVID-19 treatment. To address this gap, we propose a multistage multimodal deep learning (MMDL) model that (1) assesses the patient's current condition (i.e., mild or severe symptoms) and then (2) gives early warnings to patients with mild symptoms who are at high risk of developing severe illness. MMDL is built on a sequential stage-wise learning architecture whose design philosophy is that the model's predicted outcome depends not only on the current situation but also on the history. Concretely, we combine the latest round of multimodal clinical data with decayed past information to make assessments and predictions. In each round (stage), a two-layer multimodal feature extractor extracts latent feature representations across different modalities of clinical data, including patient demographics, clinical manifestations, and 11 modalities of laboratory test results. We conduct experiments on a clinical dataset of 216 COVID-19 patients, whose use was approved by the medical ethics committee. Experimental results validate our assumption that sequential stage-wise learning outperforms single-stage learning, while distant history has little influence on the learning outcome. Comparison tests also show the advantage of multimodal learning: MMDL with multimodal inputs outperforms every reduced model restricted to a single modality. In addition, we have deployed a prototype of MMDL in a hospital for clinical comparison tests and to assist doctors in clinical diagnosis.
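The stage-wise design described in the abstract, which fuses the latest round's modalities in two layers and then blends it with exponentially decayed history, can be sketched as below. The projection shapes, ReLU activations, and the single scalar decay factor are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def fuse_modalities(modalities, weights):
    """Two-layer multimodal feature extractor (sketch): project each
    modality separately, then project the concatenation into a shared
    latent representation."""
    # layer 1: per-modality linear projection + ReLU
    per_mod = [np.maximum(0.0, x @ w)
               for x, w in zip(modalities, weights["per_mod"])]
    # layer 2: cross-modality projection of the concatenated features
    return np.maximum(0.0, np.concatenate(per_mod) @ weights["cross"])

def stagewise_state(latents, decay=0.5):
    """Blend the latest round's latent features with exponentially decayed
    information from earlier rounds, so recent history dominates and
    distant history has little influence on the outcome."""
    state = np.zeros_like(latents[0], dtype=float)
    for z in latents:                      # rounds in chronological order
        state = decay * state + (1.0 - decay) * z
    return state
```

With `decay=0` only the latest round matters; larger values let earlier rounds contribute with geometrically shrinking weight, matching the finding that long-ago history barely affects the prediction.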


Subjects
COVID-19 , Deep Learning , Humans , Patient Acuity , Patients , Disease Outbreaks
2.
Frontiers in Public Health ; 10, 2022.
Article in English | EuropePMC | ID: covidwho-2147426

3.
Future Generation Computer Systems ; 2021.
Article in English | ScienceDirect | ID: covidwho-1364020

ABSTRACT

Human-imperceptible adversarial examples crafted by ℓ0-norm attacks, which minimize the ℓ0 distance from the original image, mislead deep neural network classifiers into wrong classifications. Prior defenses against ℓ0 attacks can neither eliminate the perturbed pixels nor improve classifier performance on the recovered low-quality images. To address this issue, we propose a novel method, called space transformation pixel defender (STPD), which transforms any image into a latent space in order to separate perturbed pixels from normal pixels. In particular, this strategy uses a set of one-class classifiers, including Isolation Forest and Elliptic Envelope, to locate the perturbed pixels in adversarial examples. Pixels that receive more than half of the votes from these one-class classifiers are then replaced by the values of their neighboring normal pixels. We use this strategy to successfully defend against well-known ℓ0-norm adversarial examples in image classification settings. We report experimental results under the One-Pixel Attack (OPA), the Jacobian-based Saliency Map Attack (JSMA), and the Carlini-Wagner (CW) ℓ0-norm attack on the CIFAR-10, COVID-CT, and ImageNet datasets. The results show that our approach defends against ℓ0-norm attacks more effectively than the most popular defense techniques.
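The vote-then-repair idea above can be sketched in a few lines. For self-containment, two simple NumPy outlier detectors (neighborhood-median deviation and a global z-score) stand in for the paper's Isolation Forest and Elliptic Envelope; the majority-vote rule and the neighbor-value replacement follow the description, but thresholds and the 3x3 window are illustrative assumptions.

```python
import numpy as np

def local_median(img):
    """3x3 neighborhood median for every pixel (edge-padded)."""
    pad = np.pad(img, 1, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(pad[i:i + 3, j:j + 3])
    return out

def detect_local(img, thresh=0.3):
    """Detector 1 (stand-in): pixel deviates strongly from its 3x3 median."""
    return np.abs(img - local_median(img)) > thresh

def detect_global(img, z=3.0):
    """Detector 2 (stand-in): pixel is a z-score outlier of the image."""
    mu, sigma = img.mean(), img.std() + 1e-12
    return np.abs(img - mu) / sigma > z

def stpd_repair(img, detectors):
    """Replace pixels flagged by more than half of the detectors with the
    median of their 3x3 neighborhood (an approximation of 'the value of
    the neighboring normal pixels')."""
    votes = sum(d(img).astype(int) for d in detectors)
    flagged = 2 * votes > len(detectors)   # strict majority of the votes
    repaired = img.astype(float).copy()
    med = local_median(img)
    repaired[flagged] = med[flagged]
    return repaired
```

On a one-pixel (OPA-style) perturbation of a flat image, both detectors flag the attacked pixel and the repair restores the original value; the neighborhood median ignores the single outlier inside its own window.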
